1. What is hierarchical planning and explain the method presented during the course.

    Hierarchical planning is a planning technique that is particularly efficient for problems with complex domains. Hierarchical planners are search algorithms able to create plans at different levels of abstraction. This feature is particularly evident in the types of operators they use: atomic operators (elementary actions) and macro operators (sets of elementary actions).

    Given a goal, a hierarchical planner performs a meta-level search to generate a meta-level plan which leads from a state very close to the initial one to a state very close to the goal. The plan is then completed with a low-level search, where the planner considers the details that were set aside in the previous steps. A hierarchical planner must be able both to organize the plan from a high-level perspective and to expand abstract plans into more concrete ones.

    We studied two algorithms for hierarchical planning: a STRIPS-like one and a partial-order one.

    The first algorithm assigns a criticality value to each precondition. It then fully explores the space at a given level of abstraction before moving to a more detailed one; at every level of abstraction, it generates a complete plan.

    The partial-order-like algorithm uses macro operators to choose, at each step, whether it is more convenient to open a goal with an operator or to expand a macro step of the plan.
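    The criticality mechanism of the first algorithm can be sketched as follows; the operator, predicate names, and criticality values are purely illustrative assumptions, not material from the course:

```python
# Hypothetical sketch of planning by criticality levels: at each abstraction
# level, only preconditions whose criticality meets the current threshold are
# considered; the less critical details are deferred to lower levels.

def preconditions_at_level(operator, level):
    """Keep only the preconditions at least as critical as `level`."""
    return [p for (p, crit) in operator["preconds"] if crit >= level]

move = {"name": "move(A, B)",
        "preconds": [("path_exists(A, B)", 3),   # hard to achieve -> high criticality
                     ("at(robot, A)", 2),
                     ("battery_charged", 1)]}    # easy detail -> low criticality

# Highest abstraction level: only the most critical precondition remains.
print(preconditions_at_level(move, 3))  # ['path_exists(A, B)']
# Most detailed level: all preconditions are considered.
print(preconditions_at_level(move, 1))
```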

  2. What is Particle Swarm Optimization and which are its main features?

    Particle swarm optimization is a search technique that emulates the interaction mechanisms between the individuals that compose a swarm, which sometimes share a common goal. Each individual of the swarm can choose between two alternatives: move away from the group (individualistic choice) or stay in the group (social choice). Generally, if one or more individuals move towards the goal, the other members gradually start to follow them, and the information propagates to all the members of the swarm. PSO can be used to solve optimization problems: the search strategy can be seen as a balance between exploration (individual agents looking for a solution) and exploitation (the social behaviour). Another important feature is the concept of proximity or subgroups (the so-called neighbourhood or social network). How do we find the best solution? We use as a parameter the best position found so far, and we define a fitness function that has to be maximised (or a cost function that has to be minimised). The goal is to find the global maximum in the search space.
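    The informal description above corresponds to the standard PSO update rule, sketched below; the parameter values (inertia `w`, coefficients `c1`, `c2`), swarm size, and fitness function are illustrative assumptions:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

# Standard PSO update (assumed, not course code):
#   v = w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x = x + v
def pso(fitness, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    X = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]                 # best position found by each particle
    gbest = max(pbest, key=fitness)           # best position found by the swarm
    for _ in range(iters):
        for i, x in enumerate(X):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - x[d])   # individual memory
                           + c2 * r2 * (gbest[d] - x[d]))     # social attraction
                x[d] += V[i][d]
            if fitness(x) > fitness(pbest[i]):
                pbest[i] = x[:]
        gbest = max(pbest, key=fitness)
    return gbest

# Maximise a fitness function whose global maximum is at (1, 2).
best = pso(lambda p: -((p[0] - 1) ** 2 + (p[1] - 2) ** 2))
```

    The individual-memory term plays the role of exploration, while the social-attraction term pulls particles toward the swarm's best position (exploitation).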

  3. What are non-informed search strategies? Describe the strategies that have been presented during the course.

    Uninformed search strategies are also called blind search strategies because we basically don't have any knowledge about the domain we are exploring. We usually use brute force to find the solution, proceeding step by step in the search space. Compared to informed search strategies, uninformed ones are generally more expensive in terms of time and space.

    During the course we have seen BFS, DFS, depth-limited search, iterative deepening, and uniform-cost search.

    The first one always expands the shallowest unexpanded node, using a FIFO queue for the frontier. DFS always expands the deepest node in the frontier first and uses a LIFO queue. Depth-limited search is a DFS variant that includes a maximum depth parameter: when that depth is reached (and no goal has been found), the strategy explores alternative paths through backtracking. This technique also helps avoid infinite branches. Iterative deepening combines the advantages of both BFS and DFS. Uniform-cost search expands the node with the lowest path cost; in this case the frontier is a priority queue ordered by the function g, the path cost of a node.
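    As an illustration of the last strategy, a minimal uniform-cost search sketch (the graph and its costs are made up for the example):

```python
import heapq

# Uniform-cost search: the frontier is a priority queue ordered by g(n),
# the path cost from the start node.
def uniform_cost_search(graph, start, goal):
    frontier = [(0, start, [start])]      # entries are (g, node, path)
    explored = set()
    while frontier:
        g, node, path = heapq.heappop(frontier)
        if node == goal:                  # goal test on expansion, not generation
            return g, path
        if node in explored:
            continue
        explored.add(node)
        for succ, cost in graph.get(node, []):
            if succ not in explored:
                heapq.heappush(frontier, (g + cost, succ, path + [succ]))
    return None

graph = {"A": [("B", 1), ("C", 5)], "B": [("C", 1)], "C": []}
print(uniform_cost_search(graph, "A", "C"))  # (2, ['A', 'B', 'C'])
```

    Note that the direct edge A→C costs 5, so the strategy correctly prefers the cheaper two-step path.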

  4. What are the main features of a swarm intelligence algorithm?

    The main features of a swarm intelligence (SI) algorithm are:

    1. Simple Agents with Limited Capacities: The system consists of a population of simple individuals (agents) that operate based on simple rules.
    2. No Awareness of Global View: Individual agents do not have a complete overview of the system or the global objective.
    3. Local Communication: Agents communicate locally, either directly or indirectly (stigmergic communication, such as pheromone trails in ants).
    4. Distributed Computation: There is no centralized control; the behavior emerges from local interactions among agents.
    5. Robustness: SI systems are inherently robust, as the failure of a few agents typically does not prevent the system from functioning.
    6. Adaptivity: The system adapts to changes in the environment or task requirements over time.
  5. What is conditional planning and which are its main features? + limitations

    A conditional planner is a search algorithm that generates various alternative plans for each point of uncertainty in the plan. A conditional plan is composed of:

    • Causal actions
    • Sensing actions (for retrieving new information)
    • A number of partial plans; only one of them will be executed, depending on the results of the observations made by the sensing actions.

    The main flaw of conditional planners is that they generate an exponential number of plans, and the tree may "explode" in cases with a high number of alternative contexts. This is reflected in the memory requirements.

  6. What is the modal truth criterion and why has it been defined?

    The modal truth criterion is a plan-construction criterion that guarantees the completeness of a planner (because the "promotion" and "demotion" techniques, alone, cannot). It interleaves goal-achievement steps with threat-protection steps.

    • Establishment: Achieving an open goal through one of the following means:
      • Inserting a new action into the plan that satisfies the open precondition.
      • Adding an ordering constraint so that an existing action in the plan satisfies the precondition.
      • Assigning a variable to ensure the precondition is satisfied.
    • Promotion: Resolving a threat by imposing an ordering constraint that positions the threatening action before the start of the causal link being threatened. This ensures the negative effect of the threat does not interfere with the precondition to be satisfied.
    • Demotion: Resolving a threat by imposing an ordering constraint that positions the threatening action after the end of the causal link being threatened. This ensures the satisfied precondition remains unaffected.
    • White Knight: Inserting a new action or utilizing an existing one in the plan and positioning it between the threatening action (S1) and the threatened action (S3). This intermediary action establishes the precondition of S3, neutralizing the negative effect caused by S1.
    • Separation: Introducing non-codesignation constraints between the variables of the negative effect and the threatened precondition. This prevents unification and avoids the threat. This method is particularly useful when the variables have not yet been instantiated.
  7. What is ant colony optimization?

    Ant Colony Optimization (ACO) is a swarm intelligence algorithm inspired by the foraging behavior of ants. Ants deposit pheromones on paths, and the probability of choosing a path increases with higher pheromone levels.

    Key Principles:

    1. Pheromone Trails: Better (shorter) solutions accumulate more pheromones.
    2. Positive Feedback: Promising paths are reinforced as more ants follow them.
    3. Pheromone Evaporation: Reduces the influence of less promising paths, encouraging exploration.
    4. Stochastic Solution Construction: Each ant builds solutions probabilistically based on pheromone levels (memory) and heuristics (local information).

    ACO balances exploration and exploitation to find optimal solutions.
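    The stochastic choice rule and evaporation can be sketched as follows; the weighting exponents `alpha` and `beta`, the evaporation rate `rho`, and the two-edge example are illustrative assumptions:

```python
import random

# Probabilistic edge choice: p(edge) is proportional to
# pheromone^alpha * heuristic^beta, with heuristic = 1/distance.
def choose_edge(edges, pheromone, alpha=1.0, beta=2.0):
    weights = [pheromone[e] ** alpha * (1.0 / dist) ** beta for e, dist in edges]
    r, acc = random.uniform(0, sum(weights)), 0.0
    for (e, _), w in zip(edges, weights):
        acc += w
        if r <= acc:
            return e

def evaporate(pheromone, rho=0.1):
    """Evaporation keeps older, less promising trails from dominating."""
    for e in pheromone:
        pheromone[e] *= (1 - rho)

pheromone = {"short": 1.0, "long": 1.0}
edges = [("short", 1.0), ("long", 4.0)]   # the short edge has a better heuristic
picks = [choose_edge(edges, pheromone) for _ in range(1000)]
# With equal pheromone, the short edge is chosen far more often (weight 16:1).
evaporate(pheromone)                       # pheromone["short"] is now 0.9
```

    In a full ACO run, ants that complete better tours would also deposit pheromone on their edges, closing the positive-feedback loop described above.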

  8. What are the main features of iterative deepening?

    Iterative deepening is a DFS variant or, more precisely, a depth-limited search variant. It avoids the problem of choosing a maximum depth parameter by trying all possible depth limits, starting with depth = 1 and iteratively incrementing. Iterative deepening combines the advantages of BFS and DFS while avoiding infinite branches. If the step cost is 1 the algorithm is also optimal, and it is complete if the branching factor is finite. Time complexity is O(b^d) and space complexity is O(bd).
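    A minimal sketch of the strategy, assuming a simple successor-list graph representation (the graph itself is illustrative):

```python
# Depth-limited DFS: explores alternatives by backtracking once the limit is hit.
def depth_limited(graph, node, goal, limit, path):
    if node == goal:
        return path
    if limit == 0:
        return None
    for succ in graph.get(node, []):
        if succ not in path:              # avoid cycles along the current path
            found = depth_limited(graph, succ, goal, limit - 1, path + [succ])
            if found:
                return found
    return None

# Iterative deepening: repeat depth-limited search with a growing depth limit.
def iterative_deepening(graph, start, goal, max_depth=50):
    for depth in range(max_depth + 1):
        result = depth_limited(graph, start, goal, depth, [start])
        if result:
            return result
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(iterative_deepening(graph, "A", "D"))  # ['A', 'B', 'D']
```

    Re-expanding shallow nodes at every iteration looks wasteful, but since most nodes of a tree live at the deepest level, the overhead only changes the constant factor of the O(b^d) time bound.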

  9. What are the main approaches of deductive planning? Explain the main differences.

    The main deductive planning approaches are:

    • Logical Representation: Includes general approaches like theorem proving and specific methods such as Kowalski's formulation and situation calculus, which model states as logical snapshots of the world and their evolution through actions. A key challenge is the frame problem, which requires explicitly stating which facts remain unchanged after an action, leading to scalability issues in complex domains.
    • STRIPS: Uses a simplified structure with preconditions, add lists, and delete lists to represent actions, avoiding the frame problem by assuming that only the fluents listed in the add and delete lists are affected by actions.

    Key Differences: Logical approaches, including situation calculus and Kowalski's formulation, are more expressive but computationally demanding. STRIPS is simpler and faster. Logical methods fit controlled and complex environments, while STRIPS works well for scalable, practical domains.
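    The STRIPS representation described above can be sketched with plain sets; the blocks-world operator and predicate names are illustrative:

```python
# Minimal STRIPS-style action application: the frame problem is avoided
# because only the fluents in the add/delete lists change the state.
def applicable(state, action):
    return action["preconds"] <= state          # all preconditions hold

def apply_action(state, action):
    return (state - action["dellist"]) | action["addlist"]

stack_a_on_b = {"preconds": {"clear(A)", "clear(B)", "ontable(A)"},
                "addlist":  {"on(A,B)"},
                "dellist":  {"clear(B)", "ontable(A)"}}

state = {"clear(A)", "clear(B)", "ontable(A)", "ontable(B)"}
if applicable(state, stack_a_on_b):
    state = apply_action(state, stack_a_on_b)
print(sorted(state))  # ['clear(A)', 'on(A,B)', 'ontable(B)']
```

    Every fluent not mentioned in the add or delete list (here `ontable(B)`) carries over unchanged, which is exactly the assumption that sidesteps the frame problem.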

  10. What are metaheuristics? Describe the main algorithms that have been presented during the course.

    Metaheuristics are high-level strategies designed to guide underlying heuristics to search efficiently through large, complex spaces of potential solutions. They address the limitations of basic local search methods by introducing mechanisms to overcome local optima and explore the solution space more comprehensively. Metaheuristics treat the optimization problem as a search process over a graph, where:

    • Nodes represent possible solutions.
    • Edges represent transitions between solutions, typically defined by neighbourhood moves.
    1. Simulated Annealing (SA): Inspired by statistical mechanics, it introduces a probabilistic mechanism to accept worse solutions. The likelihood of accepting such moves decreases as the "temperature" is gradually reduced.
    2. Tabu Search (TS): Employs a "tabu list" to record recent solutions or moves, preventing cycling and encouraging exploration of new areas. Aspiration criteria allow overriding tabu restrictions if a move leads to a significantly better solution.
    3. Iterated Local Search (ILS): Combines local search with perturbation mechanisms to escape local optima. After reaching a local optimum, the algorithm perturbs the solution to explore new regions and refines the new solution using local search.
    4. Genetic Algorithms (GA): Inspired by evolution, GAs work on a population of solutions and evolve them through selection, crossover, and mutation. This approach is particularly effective for problems where solutions can be composed of smaller, reusable building blocks.
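    As an example of the first metaheuristic, a simulated annealing sketch on a toy 1-D minimisation problem; the temperature, cooling rate, and neighbourhood move are illustrative assumptions:

```python
import math
import random

# Simulated annealing: worse moves are accepted with probability
# exp(-delta / T), and T is gradually reduced by a geometric cooling schedule.
def simulated_annealing(cost, x0, T=10.0, cooling=0.95, iters=2000):
    x, best = x0, x0
    for _ in range(iters):
        neighbour = x + random.uniform(-1, 1)        # local move
        delta = cost(neighbour) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / T):
            x = neighbour                            # accept (possibly worse) move
        if cost(x) < cost(best):
            best = x
        T *= cooling                                 # reduce the temperature
    return best

random.seed(1)
best = simulated_annealing(lambda v: (v - 3) ** 2, x0=-10.0)
# best should end up close to the global minimum at v = 3
```

    At high temperature the algorithm explores freely; as T falls it degenerates into greedy local search, which is the exploration/exploitation trade-off these methods are built around.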
  11. What is arc consistency? Describe the algorithm to achieve it. Explain the properties of values that are removed from the domains and of values that are left in the domain.

    Arc Consistency in CSPs ensures that for every value in the domain of one variable, there is at least one corresponding value in the domain of the related variable that satisfies their constraint. If no such value exists, the value is removed.

    Algorithm (AC-3):

    1. Initialize: Add all arcs a(i,j) to a queue.
    2. Process Queue: For each arc a(i,j):
      • For each value x ∈ Di, check if there exists a supporting value y ∈ Dj that satisfies the constraint.
      • If no such y exists, remove x from Di.
    3. Update Queue: If Di changes, re-check arcs involving Xi.

    Properties:

    • Removed Values: A value is removed if no supporting value exists in the related variable's domain.
    • Remaining Values: A value stays if there is at least one valid supporting value in the other variable's domain.
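    The steps above can be sketched as follows; representing each constraint as a predicate over value pairs is an assumption made for the example:

```python
from collections import deque

def revise(domains, constraints, xi, xj):
    """Remove values of Xi that have no supporting value in the domain of Xj."""
    removed = False
    for x in list(domains[xi]):
        if not any(constraints[(xi, xj)](x, y) for y in domains[xj]):
            domains[xi].remove(x)
            removed = True
    return removed

def ac3(domains, constraints):
    queue = deque(constraints.keys())            # initialize with all arcs (Xi, Xj)
    while queue:
        xi, xj = queue.popleft()
        if revise(domains, constraints, xi, xj):
            if not domains[xi]:
                return False                     # a domain was wiped out
            for (xk, xl) in constraints:         # re-check arcs pointing at Xi
                if xl == xi and xk != xj:
                    queue.append((xk, xl))
    return True

# Example: constraint X < Y with both domains {1, 2, 3}.
domains = {"X": {1, 2, 3}, "Y": {1, 2, 3}}
constraints = {("X", "Y"): lambda x, y: x < y,
               ("Y", "X"): lambda y, x: x < y}
ac3(domains, constraints)
print(domains)  # X loses 3 (no greater Y), Y loses 1 (no smaller X)
```

    The surviving values each keep at least one support in the other domain, which is exactly the "remaining values" property stated above.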
  12. What is Breadth-First Search? Describe this search strategy and discuss its completeness and complexity.

    BFS is an uninformed search strategy. The search starts with the expansion of the root node, then all the successors of the root node are expanded, then their successors and so forth. Generally, the shallowest unexpanded node is chosen for expansion. BFS uses a FIFO queue for the frontier (queueing function: add successors at the end of the queue). The goal test is applied to each node only when it is selected for expansion.
    BFS is complete if the branching factor is finite. Its time and space complexity is O(b^d), and the algorithm is optimal only if all step costs are equal (e.g. 1), but not optimal in general.
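    A minimal BFS sketch along the lines described above (the graph is illustrative):

```python
from collections import deque

# BFS: FIFO frontier; the goal test is applied when a node is selected
# for expansion, as described above.
def bfs(graph, start, goal):
    frontier = deque([[start]])           # queue of paths
    visited = {start}
    while frontier:
        path = frontier.popleft()         # shallowest unexpanded node first
        node = path[-1]
        if node == goal:
            return path
        for succ in graph.get(node, []):
            if succ not in visited:
                visited.add(succ)
                frontier.append(path + [succ])   # successors go to the back
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": []}
print(bfs(graph, "A", "E"))  # ['A', 'C', 'E']
```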

  13. What are the main features of a local search algorithm?

    The characteristic feature of local search algorithms is the way they reach a solution: they start from an initial solution and iteratively try to improve it through local moves. In greedy approaches, a move is only performed if the solution it produces is better than the current one.

    This kind of behaviour defines the concept of neighbourhood (the larger the neighbourhood, the more likely a local minimum is also a global minimum). The algorithm searches this "area" until it finds a local (or, if we're lucky, global) minimum. Unfortunately, these algorithms don't remember the search states already reached. Another problem is the shape of the search space, which can trap the algorithm in local minima, ridges, and plateaus.
    This kind of behavior defines the concept of neighborhood (the larger the neighborhood, the more likely a local minimum is a global minimum). The algorithm search this “area” until it finds a local (or global if we’re lucky) minimum. Unfortunately, this kind of algorithms don’t remember the search states already reached. Another problem is the search space, that could block the algorithm with local minimum, ridges and plateaus.